A year ago I argued that as AI algorithms structure more and more decisions, “last mile” human judgment would become a luxury good available to those who could pay for concierge discernment. The 5-15% “last mile” is the space algorithms can’t capture, where fine-tuned human judgment is needed.
The “last mile” in higher education has always been personalized attention, and in the AI era its outcome is systematic acceleration. Students trained to run a two-minute mile will soon arrive in a system built for the four-minute mile. The entire structure of the U.S. undergraduate university, from the Carnegie Unit to the four-year degree, is calibrated for a steady, predictable pace. A student who can master content at twice the expected speed creates a structural challenge I call the “two-minute mile” or “fast mile” problem.
As Alpha School is demonstrating, personalized, specialized attention yields students learning at 2.6x to 5x the expected rate. Read the New York Times piece and the epic Astral Codex Ten review. (Also read Benjamin Bloom’s 1984 “2 Sigma” paper on what personalized tutoring and immediate feedback can do.)
Acceleration matters beyond classroom practice. Once learning velocity is systematically measured and institutionalized, the entire temporal organization of education collapses. Age-graded cohorts, semester schedules, and degree timelines are all based on the idea that learning happens at a predictable, standardized pace. Of course, that was never true; pace has always varied. But AI both enables and amplifies that variation.
No state in the U.S. currently measures learning velocity, and I’ve found no studies focused on the measurement of accelerated learning. Under the Every Student Succeeds Act (ESSA), states measure whether students meet proficiency benchmarks and show improvement compared to prior years. These tests track learning from fixed points, assuming a roughly linear learning curve over the school year. Accountability systems also monitor chronic absenteeism, which is treated as a proxy for interrupted learning, even though it may increasingly reflect time spent learning outside the classroom with AI.
Measuring acceleration would require continuous tracking of the rate of change: how quickly a student’s learning slope is steepening or flattening over shorter intervals. That kind of measurement is harder to operationalize, though some adaptive platforms, such as Khanmigo and NWEA’s MAP Growth, are beginning to approximate it. No ESSA-compliant state test currently quantifies changes in learning velocity.
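To make the distinction concrete, here is a minimal sketch of the difference between the growth that ESSA-style tests capture (one slope between fixed points) and acceleration (the change in that slope across shorter intervals). All dates, scores, and units below are hypothetical, chosen only to illustrate the calculation, not drawn from any state system or platform:

```python
from datetime import date

# Hypothetical assessment record for one student: (test date, scale score).
# Real adaptive platforms sample far more densely; this is illustrative only.
scores = [
    (date(2025, 9, 1), 210),
    (date(2025, 10, 1), 218),
    (date(2025, 11, 1), 230),
    (date(2025, 12, 1), 248),
]

def velocities(points):
    """Learning velocity: points gained per week between assessments."""
    out = []
    for (d0, s0), (d1, s1) in zip(points, points[1:]):
        weeks = (d1 - d0).days / 7
        out.append((s1 - s0) / weeks)
    return out

v = velocities(scores)
# Acceleration: how quickly the learning slope itself is changing.
a = [v1 - v0 for v0, v1 in zip(v, v[1:])]

print([round(x, 2) for x in v])  # [1.87, 2.71, 4.2]  points per week
print([round(x, 2) for x in a])  # [0.84, 1.49]  a steepening slope
```

A growth model of the kind states report today would summarize this student with a single September-to-December slope; the acceleration numbers show that the pace more than doubled within one term, which is exactly the signal current accountability systems discard.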
Over 92% of high school students now use generative AI (GenAI) for coursework, authorized or not. Most of this use is unstructured: essay drafts, problem sets, exam prep, on-demand tutoring without pedagogical design. But even unstructured GenAI use changes student expectations about learning pace, feedback loops, and the role of human instruction. Students are learning that knowledge is abundant and instantly accessible, and that feedback can be immediate. While only a few years ago nearly 100% of course content was provided by the teacher (under state and school district oversight), soon it could be 50% or less. These students will arrive at college seeing GenAI as a resource and a route to knowledge outside the classroom, and they will expect immediate feedback and unlimited patience from their learning tools.
If a growing cohort of “fast mile” students arrives at college demanding a high-touch, personalized, accelerated two-year degree, the economic consequences for traditional higher education are enormous, up to and including the eventual end of the Carnegie Unit as the “credit hour.” Measuring learning as time spent at 1x will not do.
Universities will need both “last mile” faculty experts to mentor individual students on personalized coursework beyond what AI can offer, from frontier knowledge and methodology to new research questions, and “fast mile” support faculty and staff to work with young students (ages 11-17) who have accelerated to college readiness and with students who have asymmetrical knowledge, e.g., graduate-level physics but freshman-level humanities.
The trajectory hasn’t fully materialized…yet. But paradigm shifts arrive before the comprehensive data. They appear first as anomalies that existing frameworks can’t explain. Alpha School is one such anomaly. The 92% of students using GenAI for coursework are another. The fact that no state measures learning velocity is itself diagnostic: states have built an entire accountability apparatus around a single assumption (steady-state learning pace) that may no longer hold. If even 10-15% of incoming students arrive having experienced systematic acceleration in K-12 within the next decade, universities will face structural problems not solvable with small adjustments. The interdependencies between accreditation, financial aid, faculty roles, and the Carnegie Unit mean these questions cannot be addressed in isolation. Coordinated institutional rethinking should begin now.
Below are the questions university leaders need to be asking. They make visible that institutions are currently optimized for a model (content delivery at a standard pace) that may become obsolete within a decade, while underinvesting in the real value proposition of higher education: frontier mentorship and acceleration coaching.
25 questions for university presidents to ask their teams:
On the Carnegie Unit and Credit Hours
On Accreditation and External Constraints
(As these questions suggest, accreditation, financial aid, and the Carnegie Unit form an interlocking system designed to ensure standardization and prevent fraud. Together they lock in a specific model of learning: time-based, cohort-driven, and paced at a standard rate. You cannot change one part without changing the entire structure.)
On Student Expectations and Experience
On Our Economic and Competitive Position

This last question is the only one that really matters. The others are downstream. If universities cannot articulate what they’re selling beyond “the credential” and “time to mature,” then they are vulnerable to any competitor who can provide credentials faster and cheaper. The luxury good model (small, expensive, mentorship-intensive) is economically viable but serves a tiny fraction of current enrollment. The democratization model (technology-enabled content plus guaranteed mentorship) has never been successfully operationalized at scale. The middle ground, where most universities currently operate, assumes students will continue paying premium prices for services they don’t access.
All of these questions assume a certain honesty. So how does a university assess whether students claiming accelerated mastery actually possess the knowledge, rather than credentials from variable-quality programs? As competency-based K-12 programs proliferate, quality will vary dramatically. Some will deliver genuine 2-5x learning; others will be credentialing theater, optimizing for advancement speed over retention and transfer. Universities will need independent assessment mechanisms (placement testing, diagnostic assessments, portfolio reviews) that don’t simply trust transcripts. But this is a second-order problem. The first-order problem is whether universities are willing to acknowledge that the four-year, time-based degree model is coming to an end.